In the field of gait recognition from motion capture data, designing human-interpretable gait features is a common practice of many fellow researchers. To refrain from ad-hoc schemes and to find maximally discriminative features, we may need to explore beyond the limits of human interpretability. This paper contributes to the state-of-the-art with a machine learning approach for extracting robust gait features directly from raw joint coordinates. The features are learned by a modification of Linear Discriminant Analysis with the Maximum Margin Criterion so that identities are maximally separated and, in combination with an appropriate classifier, are used for gait recognition. Experiments on the CMU MoCap database show that this method outperforms eight other relevant methods in terms of the distribution of biometric templates in the respective feature spaces, expressed in four class-separability coefficients. Additional experiments indicate that this method is a leading concept for rank-based classifier systems.
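The core idea behind the Maximum Margin Criterion is to learn a linear projection that maximizes the trace of the difference between the between-class and within-class scatter matrices, which reduces to an eigendecomposition without requiring the within-class scatter to be invertible. The following sketch illustrates this general technique; it is a minimal illustration assuming feature vectors as rows of a matrix, not the authors' actual implementation (the function name `mmc_features` and all parameters are hypothetical):

```python
import numpy as np

def mmc_features(X, y, n_components):
    """Learn a linear feature map by the Maximum Margin Criterion.

    X: (n_samples, n_dims) raw measurements (e.g. flattened joint coordinates)
    y: (n_samples,) identity labels
    Returns the projected samples and the projection matrix W.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    # MMC: maximize tr(W^T (Sb - Sw) W) -> top eigenvectors of (Sb - Sw).
    # Unlike classical LDA, no inversion of Sw is needed, so the method
    # remains stable when Sw is singular (few samples per identity).
    vals, vecs = np.linalg.eigh(Sb - Sw)  # ascending eigenvalues
    W = vecs[:, ::-1][:, :n_components]   # keep the largest ones
    return X @ W, W
```

Templates projected this way can then be compared by any distance-based or rank-based classifier, since identities are pushed apart while intra-identity variation is compressed.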